812 research outputs found

    International Mobility of Health Professionals: Brain Drain or Brain Exchange?

    Get PDF
    international mobility, health, nurses, doctors

    Touching the invisible: Localizing ultrasonic haptic cues

    Get PDF
    While mid-air gestures offer new possibilities to interact with or around devices, some situations, such as interacting with applications, playing games or navigating, may require visual attention to be focused on a main task. Ultrasonic haptic feedback can provide 3D spatial haptic cues that do not demand visual attention in these contexts. In this paper, we present an initial study of active exploration of ultrasonic haptic virtual points that investigates spatial localization with and without the use of the visual modality. Our results show that, when haptic feedback indicates the location of a widget, users perform 50% more accurately than with visual feedback alone. When given only the haptic location of a widget, users are more than 30% more accurate than when given a visual location. When users were aware of the location of the haptic feedback, active exploration decreased the minimum recommended widget size from 2 cm² to 1 cm² compared with the passive exploration used in previous studies. Our results will allow designers to create better mid-air interactions using this new form of haptic feedback.

    Contemporary Union Organizing in the UK—Back to the Future?

    Get PDF
    Attempts to revitalize trade unions in the UK have had mixed results, leading to calls for more radical organizing strategies. This paper examines a recent organizing campaign in the UK public sector that involved a shift from an approach focused on developing rank-and-file leadership and worker engagement to one that prioritized member recruitment. The paper argues that a focus on recruitment is not necessarily inimical to union revitalization, but that this depends on the extent to which recruitment is used to develop new activists and to strengthen the ability of local unions to provide effective representation.

    An experimental investigation of an incompressible wall jet impinging on a receiver with spill port

    Get PDF
    A wall jet impinging on a receiver with a spill port located at 90° with respect to the wall jet was investigated. The effects of receiver width and length on the flow field were studied for a range of downstream loading conditions, varying from fully open to completely blocked.

    Alfred: A System for Prompted Weak Supervision

    Full text link
    Alfred is the first system for programmatic weak supervision (PWS) that creates training data for machine learning by prompting. In contrast to typical PWS systems, where weak supervision sources are programs coded by experts, Alfred enables users to encode their subject matter expertise via natural language prompts for language and vision-language models. Alfred provides a simple Python interface for the key steps of this emerging paradigm, with a high-throughput backend for large-scale data labeling. Users can quickly create, evaluate, and refine their prompt-based weak supervision sources; map the results to weak labels; and resolve their disagreements with a label model. Alfred enables a seamless local development experience backed by models served from self-managed computing clusters. It automatically optimizes prompt execution with batching mechanisms; we find that this optimization improves query throughput by 2.9x versus a naive approach. We present two example use cases demonstrating Alfred on YouTube comment spam detection and pet breed classification. Alfred is open source, available at https://github.com/BatsResearch/alfred. (Comment: ACL 2023 System Demonstration Track)
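
    The workflow the abstract describes — prompt templates acting as weak supervision sources, model responses mapped to weak labels, and disagreements resolved by a label model — can be sketched in a few lines of plain Python. The sketch below is illustrative only: the prompt texts, the mock model call, and the majority-vote resolution are assumptions for exposition, not Alfred's actual API (see the linked repository for that).

        # Minimal sketch of prompted weak supervision (PWS): natural-language
        # prompts act as labeling functions whose noisy votes are combined by
        # a label model. All names here are illustrative, not Alfred's API.
        from collections import Counter

        SPAM, HAM, ABSTAIN = 1, 0, -1

        # Hypothetical prompt-based weak supervision sources for spam detection.
        PROMPTS = [
            "Does this comment ask the reader to click a link? Answer yes or no: {text}",
            "Is this comment advertising a product or channel? Answer yes or no: {text}",
            "Is this comment a genuine reaction from a viewer? Answer yes or no: {text}",
        ]

        # Each source gets its own response-to-label map; note the third
        # prompt's "yes" maps to HAM, the first two map "yes" to SPAM.
        RESPONSE_MAPS = [
            {"yes": SPAM, "no": HAM},
            {"yes": SPAM, "no": HAM},
            {"yes": HAM, "no": SPAM},
        ]

        def query_model(prompt: str) -> str:
            """Stand-in for a call to a served language model."""
            return "yes" if "free" in prompt.lower() else "no"

        def weak_labels(text: str) -> list[int]:
            """Run every prompt source and map responses to weak labels."""
            labels = []
            for prompt, mapping in zip(PROMPTS, RESPONSE_MAPS):
                response = query_model(prompt.format(text=text))
                labels.append(mapping.get(response, ABSTAIN))
            return labels

        def resolve(labels: list[int]) -> int:
            """Toy label model: majority vote over non-abstaining sources."""
            votes = Counter(l for l in labels if l != ABSTAIN)
            return votes.most_common(1)[0][0] if votes else ABSTAIN

        comment = "Click here for free subscribers!!!"
        print(resolve(weak_labels(comment)))  # -> 1 (SPAM)

    In Alfred itself, the model-query step is where the system's batching optimizations apply; that is the source of the 2.9x query-throughput improvement the abstract reports.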

    Follow-Up Differential Descriptions: Language Models Resolve Ambiguities for Image Classification

    Full text link
    A promising approach for improving the performance of vision-language models like CLIP on image classification is to extend the class descriptions (i.e., prompts) with related attributes, e.g., using "brown sparrow" instead of "sparrow". However, current zero-shot methods select a subset of attributes regardless of commonalities between the target classes, potentially providing no information that would have helped to distinguish between them. For instance, they may use color instead of bill shape to distinguish between sparrows and wrens, which are both brown. We propose Follow-up Differential Descriptions (FuDD), a zero-shot approach that tailors the class descriptions to each dataset and leads to additional attributes that better differentiate the target classes. FuDD first identifies the ambiguous classes for each image, and then uses a Large Language Model (LLM) to generate new class descriptions that differentiate between them. The new class descriptions resolve the initial ambiguity and help predict the correct label. In our experiments, FuDD consistently outperforms generic description ensembles and naive LLM-generated descriptions on 12 datasets. We show that differential descriptions are an effective tool for resolving class ambiguities, which otherwise significantly degrade performance. We also show that the high-quality natural language class descriptions produced by FuDD result in performance comparable to few-shot adaptation methods. (Comment: Code: https://github.com/BatsResearch/fud)
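
    The two-stage procedure lends itself to a compact sketch: score an image against generic per-class descriptions, treat near-tied classes as ambiguous, and re-score only those classes with LLM-written differential descriptions. The scorer and description generator below are stubs, and all names, including the `margin` threshold for deciding ambiguity, are assumptions for illustration rather than the released FuDD code.

        # Sketch of the FuDD two-stage idea with stubbed scorer and LLM.
        import numpy as np

        def score(image, descriptions: dict[str, str]) -> dict[str, float]:
            """Stand-in for CLIP-style image-text similarity scoring."""
            rng = np.random.default_rng(0)
            return {c: float(rng.random()) for c in descriptions}

        def differential_descriptions(classes: list[str]) -> dict[str, str]:
            """Stand-in for prompting an LLM for descriptions that
            explicitly differentiate the given ambiguous classes."""
            return {
                c: f"a photo of a {c}, unlike "
                   + ", ".join(o for o in classes if o != c)
                   + ", identifiable by its bill shape"
                for c in classes
            }

        def classify(image, classes: list[str], margin: float = 0.1) -> str:
            # Stage 1: generic descriptions for every class.
            generic = {c: f"a photo of a {c}" for c in classes}
            scores = score(image, generic)
            best = max(scores, key=scores.get)
            # Classes scoring within `margin` of the top are ambiguous.
            ambiguous = [c for c in classes if scores[best] - scores[c] <= margin]
            if len(ambiguous) <= 1:
                return best
            # Stage 2: re-score only the ambiguous classes with descriptions
            # written to contrast them against each other.
            refined = score(image, differential_descriptions(ambiguous))
            return max(refined, key=refined.get)

        # Wide margin so the demo actually triggers the follow-up stage.
        print(classify("img.jpg", ["sparrow", "wren", "eagle"], margin=0.5))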

    Zero-Shot Learning with Common Sense Knowledge Graphs

    Full text link
    Zero-shot learning relies on semantic class representations such as hand-engineered attributes or learned embeddings to predict classes without any labeled examples. We propose to learn class representations from common sense knowledge graphs. Common sense knowledge graphs are an untapped source of explicit high-level knowledge that requires little human effort to apply to a range of tasks. To capture the knowledge in the graph, we introduce ZSL-KG, a general-purpose framework with a novel transformer graph convolutional network (TrGCN) for generating class representations. Our proposed TrGCN architecture computes non-linear combinations of the node neighbourhood and shows improvements on zero-shot learning tasks in language and vision. Our results show that ZSL-KG outperforms the best-performing graph-based zero-shot learning framework by an average of 2.1 accuracy points, with improvements as high as 3.4 accuracy points. Our ablation study on ZSL-KG with alternate graph neural networks shows that our TrGCN adds up to 1.2 accuracy points of improvement on these tasks.
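
    The core architectural idea — replacing a GCN's linear neighborhood averaging with a small transformer so the aggregation is a non-linear, permutation-invariant function of the neighbor set — can be sketched in PyTorch. This is a minimal illustration of that idea under our own assumptions about layer shape and pooling, not the paper's exact TrGCN architecture.

        # Each node attends over its own neighborhood (itself plus its
        # neighbors), then a feed-forward block produces the non-linear
        # combination; mean pooling keeps the result permutation-invariant.
        import torch
        import torch.nn as nn

        class TransformerNeighborhoodLayer(nn.Module):
            def __init__(self, dim: int, heads: int = 2):
                super().__init__()
                self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
                self.ff = nn.Sequential(
                    nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
                )

            def forward(self, x: torch.Tensor,
                        neighbors: list[list[int]]) -> torch.Tensor:
                out = torch.empty_like(x)
                for i, nbrs in enumerate(neighbors):
                    ctx = x[[i] + nbrs].unsqueeze(0)   # (1, k, dim) neighborhood set
                    h, _ = self.attn(ctx, ctx, ctx)    # self-attention over the set
                    pooled = h.mean(dim=1).squeeze(0)  # permutation-invariant pool
                    out[i] = self.ff(pooled)           # non-linear combination
                return out

        # Toy graph: 4 nodes with adjacency given as neighbor lists.
        x = torch.randn(4, 8)
        neighbors = [[1, 2], [0], [0, 3], [2]]
        layer = TransformerNeighborhoodLayer(dim=8)
        print(layer(x, neighbors).shape)  # torch.Size([4, 8])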